    Using NTFS cluster allocation behavior to find the location of user data

    Digital forensics is heavily affected by the large and increasing amount of data to be processed. To address the problem, ongoing research seeks more efficient carving algorithms, parallel processing in the cloud, and data reduction by filtering out uninteresting files. Our approach builds on the principle of searching where it is more probable to find what you are looking for. We have therefore empirically studied the behavior of the cluster allocation algorithm(s) in the New Technology File System (NTFS) to see where new data is actually placed on disk. The experiment consisted of randomly writing, increasing, reducing and deleting files in 32 newly installed Windows 7, 8, 8.1 and 10 virtual computers using VirtualBox. The results show that data is (as expected) more frequently allocated closer to the middle of the disk. That area should therefore receive more attention during a digital forensic investigation of an NTFS-formatted hard disk. Knowledge of the probable position of user data can be used by a forensic investigator to prioritize relevant areas in storage media, without the need for a working file system. It can also be used to increase the efficiency of hash-based carving by dynamically changing the sampling frequency. Our findings also contribute to digital forensic processes in general, which can now be focused on the interesting regions of storage devices, increasing the probability of getting relevant results faster.
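
    A minimal sketch of what "dynamically changing the sampling frequency" for hash-based carving could look like, assuming (as the abstract suggests) that user data concentrates towards the middle of the disk. The step sizes, the distance threshold and the function names are illustrative assumptions, not taken from the paper.

```python
import hashlib

SECTOR_SIZE = 512  # bytes per logical sector

def sampling_step(lba: int, total_sectors: int,
                  dense_step: int = 8, sparse_step: int = 64) -> int:
    # Hypothetical weighting: sample densely near the middle of the
    # disk, where user data is more likely, and sparsely elsewhere.
    distance = abs(lba - total_sectors // 2) / (total_sectors / 2)
    return dense_step if distance < 0.5 else sparse_step

def sampled_sector_hashes(image_path: str, total_sectors: int):
    # Yield (lba, sha1) pairs at a position-dependent sampling rate.
    with open(image_path, "rb") as img:
        lba = 0
        while lba < total_sectors:
            img.seek(lba * SECTOR_SIZE)
            sector = img.read(SECTOR_SIZE)
            if len(sector) < SECTOR_SIZE:
                break
            yield lba, hashlib.sha1(sector).hexdigest()
            lba += sampling_step(lba, total_sectors)
```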

    An Empirical Study of the NTFS Cluster Allocation Behavior Over Time

    The amount of data to be handled in digital forensic investigations is continuously increasing, while the tools and processes used are not developed accordingly. This especially affects the digital forensic subfield of file carving. The use of the structuring of stored data induced by the allocation algorithm to increase the efficiency of the forensic process has been independently suggested by Casey and us. Building on that idea, we have set up an experiment to study the allocation algorithm of NTFS and its behavior over time from different points of view. This includes whether the allocation algorithm behaves the same regardless of Windows version or size of the hard drive, its adherence to the best-fit allocation strategy, and the distribution of the allocation activity over the available (logical) storage space. Our results show that drive size is not a factor, but there are differences in the allocation behavior between Windows 7 and Windows 10. The results also show that the allocation strategy favors filling in holes in the already written area instead of claiming the unused space at the end of a partition, and that the area with the highest allocation activity slowly progresses from approximately 10 GiB into the partition towards its end as the disk fills up.
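
    One way allocation activity like this could be measured is by diffing successive snapshots of the NTFS $Bitmap (one bit per cluster, 1 = allocated, least-significant bit first within each byte) and bucketing newly allocated clusters by logical offset. A minimal sketch; the 4 KiB cluster size and the 1 GiB bin width are assumptions, not values stated in the abstract.

```python
from collections import Counter

CLUSTER_SIZE = 4096  # common NTFS default; an assumption here

def newly_allocated_clusters(bitmap_before: bytes, bitmap_after: bytes):
    # Find clusters that flipped from free to allocated between
    # two $Bitmap snapshots.
    clusters = []
    for byte_idx, (b0, b1) in enumerate(zip(bitmap_before, bitmap_after)):
        flipped = ~b0 & b1 & 0xFF  # free -> allocated transitions
        for bit in range(8):
            if flipped & (1 << bit):
                clusters.append(byte_idx * 8 + bit)
    return clusters

def activity_per_gib(clusters):
    # Bucket allocations into 1 GiB bins of logical offset to see
    # where on the partition the allocation activity concentrates.
    gib = 1 << 30
    return Counter((c * CLUSTER_SIZE) // gib for c in clusters)
```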

    Disk Cluster Allocation Behavior in Windows and NTFS

    The allocation algorithm of a file system has a huge impact on almost all aspects of digital forensics, because it determines where data is placed on storage media. Yet there is only basic information available on the allocation algorithm of the currently most widespread file system: NTFS. We have therefore studied the NTFS allocation algorithm and its behavior empirically. To do that we used two virtual machines running Windows 7 and 10 on NTFS-formatted fixed-size virtual hard disks, the first being 64 GiB and the latter 1 TiB in size. Files of different sizes were written to disk using two writing strategies, and the $Bitmap files were manipulated to emulate file system fragmentation. Our results show that files written as one large block are allocated areas of decreasing size when the files are fragmented. The decrease in size is seen not only within files, but also between them: a file whose fragments are smaller than those of another file was written after the file with the larger fragments. We also found that a file written as a stream shows the opposite allocation behavior, i.e. its fragments increase in size as the file is written. The first allocated unit of a stream-written file is always very small and hence easy to identify. The results of the experiment are of importance to the digital forensics field and will help improve the efficiency of, for example, file carving and timestamp verification.
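
    The two writing strategies described in the abstract can be reproduced with ordinary file I/O: one call handing the whole file over at once versus a stream of small appended chunks. A hedged sketch; the chunk size and function names are arbitrary choices for illustration, not the parameters used in the experiment.

```python
import os

def write_as_one_block(path: str, size: int) -> None:
    # Hand the whole file to a single write() call; per the study,
    # such files get fragments of *decreasing* size when fragmented.
    with open(path, "wb") as f:
        f.write(os.urandom(size))

def write_as_stream(path: str, size: int, chunk: int = 64 * 1024) -> None:
    # Write the file in small chunks; per the study, such files get
    # fragments of *increasing* size, starting with a very small one.
    with open(path, "wb") as f:
        written = 0
        while written < size:
            n = min(chunk, size - written)
            f.write(os.urandom(n))
            written += n
```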

    Creating a map of user data in NTFS to improve file carving

    Digital forensics and, especially, file carving are burdened by the large amounts of data that need to be processed. Attempts to solve this problem include efficient carving algorithms, parallel processing in the cloud and data reduction by filtering uninteresting files. This research addresses the problem by searching for data where it is more likely to be found. This is accomplished by creating a probability map for finding unique data at various logical block addressing positions in storage media. SHA-1 hashes of 512 B sectors are used to represent the data. The results, which are based on a collection of 30 NTFS partitions from computers running Microsoft Windows 7 and later versions, reveal that the mean probability of finding unique hash values at different logical block addressing positions varies between 12% and 41% in an NTFS partition. The probability map can be used by a forensic analyst to prioritize relevant areas in storage media without the need for a working filesystem. It can also be used to increase the efficiency of hash-based carving by dynamically changing the random sampling frequency. The approach contributes to digital forensic processes by enabling them to focus on interesting regions in storage media, increasing the probability of obtaining relevant results faster.
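
    A rough sketch of how such a probability map could be computed: hash every 512 B sector across a collection of partition images, treat a hash occurring exactly once in the whole collection as unique, and report the unique fraction per LBA bin. The bin width, the uniqueness criterion and the function names are assumptions for illustration; a real implementation would also have to manage the memory cost of keeping every observation.

```python
import hashlib
from collections import Counter, defaultdict

SECTOR = 512  # the study hashes 512 B sectors

def sector_hashes(image_path: str):
    # Yield (lba, sha1_digest) for every 512 B sector in an image.
    with open(image_path, "rb") as img:
        lba = 0
        while True:
            sector = img.read(SECTOR)
            if len(sector) < SECTOR:
                return
            yield lba, hashlib.sha1(sector).digest()
            lba += 1

def probability_map(image_paths, bin_sectors=2048):
    # First pass: count how often each hash occurs in the collection;
    # a hash seen exactly once is treated as "unique" (an assumption;
    # it filters out common content such as all-zero sectors).
    counts = Counter()
    observations = []  # (bin, digest) pairs for the second pass
    for path in image_paths:
        for lba, digest in sector_hashes(path):
            counts[digest] += 1
            observations.append((lba // bin_sectors, digest))
    # Second pass: per LBA bin, the fraction of unique hashes.
    per_bin = defaultdict(lambda: [0, 0])  # bin -> [unique, total]
    for b, digest in observations:
        per_bin[b][1] += 1
        if counts[digest] == 1:
            per_bin[b][0] += 1
    return {b: u / t for b, (u, t) in sorted(per_bin.items())}
```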